Today, I'm talking with Alex Lintner, who is the CEO of technology and software solutions at Experian, the credit reporting company. Experian is one of those multinationals that's so big and convoluted that it has multiple CEOs all over the world, so Alex and I spent quite a lot of time talking through the Decoder questions just so I could understand how Experian is structured, how it functions, and how the kinds of decisions Alex makes actually work in practice.
Political leaders could soon launch swarms of human-imitating AI agents to reshape public opinion in a way that threatens to undermine democracy, a high-profile group of experts in AI and online misinformation has warned. The Nobel Peace Prize-winning free-speech activist Maria Ressa and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge and Yale are among a global consortium flagging the new disruptive threat posed by hard-to-detect, malicious AI swarms infesting social media and messaging channels.
One year ago this week, Silicon Valley and Wall Street were shocked by the release of China's DeepSeek mobile app, which rivaled US-based large language models like ChatGPT by showing comparable performance on key benchmarks at a fraction of the cost while using less-advanced chips. DeepSeek opened a new chapter in the US-China rivalry, with the world recognizing the competitiveness of Chinese AI models, and Beijing pouring more resources into developing its own AI ecosystem.
Salesforce-owned integration platform provider MuleSoft has added a new feature called Agent Scanners to Agent Fabric, a suite of capabilities and tools the company launched last year to rein in the growing challenge of agent sprawl across enterprises. Agent sprawl, often the result of enterprises and their technology teams adopting multiple agentic products, can fragment agents, leaving their workflows redundant or siloed across teams and platforms.
The country's top internet regulator, the Cyberspace Administration of China (CAC), requires that any company launching an AI tool with "public opinion properties or social mobilization capabilities" first file it in a public database: the algorithm registry. In a submission, developers must show how their products avoid 31 categories of risk, from age and gender discrimination to psychological harm to "violating core socialist values."
Ministers are scrambling to find a way to combat an explosion of digitally created images of semi-nude women and children on the social media platform X. The Taoiseach has rejected the suggestion that current legislation is not strong enough to deal with the issue, while Women's Aid has removed itself from X, calling the crisis a 'tipping point'. Human rights lawyer Caoilfhionn Gallagher said such sexualised abuse of children online has 'devastating' impacts.
It's not only law firms and legal departments that are adopting GenAI systems without fully understanding what they can and cannot do; court systems may also be tempted to adopt these tools to short-circuit workloads in the face of limited resources. That poses risks to the rule of law, a notion that hinges on accuracy, fairness, and public perception.
Across organizations of every size, I am seeing the same operational pattern take shape. Legal teams are carrying more work, adopting more technology, and fielding increasing demands from the business, yet the underlying infrastructure has not evolved at the same pace. The result is a readiness gap that grows quietly and gradually, often in the background of an otherwise high-functioning department. The encouraging part is that the leaders who recognize the pattern early are already finding practical ways to close it.
The Well‑Architected Framework, long used by architects to benchmark cloud workloads against pillars such as operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability, now incorporates AI-specific guidance across these pillars. The expanded lenses reflect AWS's recognition of the increasing complexity and societal impact of AI workloads, particularly those powered by generative models.
The flap of a butterfly's wings in South America can famously lead to a tornado in the Caribbean. The so-called butterfly effect, or "sensitive dependence on initial conditions" as it is more technically known, is of profound relevance for organizations seeking to deploy AI solutions. As systems become increasingly interconnected by AI capabilities that sit across and reach into a growing number of critical functions, the risk of cascade failures (localized glitches that ripple outward into organization-wide disruptions) grows substantially.
Enterprise IT execs know well the dangers of relying too heavily on third parties, the need to keep a human in the loop for automated decision systems, and the risks of telling customers too much, or too little, when policy violations require an account shutdown. But a saga that played out Tuesday between Anthropic and the CEO of a Swiss cybersecurity company brings it all into a new and disturbing context.
We are now at the point where automation, machine learning and agentic orchestration can genuinely work together. This is not theory. It is already happening in defense and civilian agencies that have moved past pilots and into production, using agents that bring context, consistency and speed to complex workflows while preserving accountability. These seven principles for an agentic government give leaders a practical framework for adopting automation and AI responsibly.
The House Democratic Commission on AI and the Innovation Economy - which will convene throughout 2026 - includes Reps. Ted Lieu, D-Calif., Josh Gottheimer, D-N.J., and Valerie Foushee, D-N.C., as co-chairs. Reps. Zoe Lofgren, D-Calif., and Frank Pallone, D-N.J., will serve as ex officio co-chairs, due to their positions as ranking members of the Science, Space and Technology Committee and the Energy and Commerce Committee, respectively.
I began the year with a blunt reality check: leadership today is forged in public, under pressure, and in real time. With Donald Trump already installed as US president for his second term, markets have moved faster than at any point in my career, reacting not to speculation but to executive action, rhetoric, and resolve. The first lesson this year has burned itself into my thinking: certainty beats comfort.
In line with our AI Principles, we're thrilled to announce that New Relic has obtained ISO/IEC 42001:2023 (ISO 42001) certification in the role of an AI developer and AI provider. This achievement reflects our commitment to developing, deploying, and providing AI features both responsibly and ethically. The certification was performed by Schellman Compliance, LLC, the first ANAB-accredited certification body based in the United States.
This Is for Everyone reads like a family newsletter: it tells you what happened, recounting the Internet's origin and evolution in great detail, but rarely explaining why the ideal of a decentralized Internet was not realized. Berners-Lee's central argument is that the web has strayed from its founding principles and been corrupted by profit-driven companies that seek to monetize our attention. But it's still possible to "fix the internet", he argues, outlining a utopian vision for how that might be done.